08. Pre-Notebook: Style Transfer
Notebook: Style Transfer
Now, you're ready to implement style transfer and apply it using your own images!
We suggest opening the notebook in a separate, working tab and continuing with it as you go through the instructional videos in this one. That way you can toggle between learning a new skill and applying it in code.
To open this notebook, you have two options:
- Go to the next page in the classroom (recommended).
- Clone the repo from GitHub and open the notebook Style_Transfer_Exercise.ipynb in the style-transfer folder. You can either clone the repository with
git clone https://github.com/udacity/deep-learning-v2-pytorch.git, or download it as an archive file from this link.
Instructions
- Load in a pre-trained VGG Net
- Freeze the weights in selected layers, so that the model can be used as a fixed feature extractor
- Load in content and style images
- Extract features from different layers of our model
- Complete a function to calculate the gram matrix of a given convolutional layer
- Define the content, style, and total loss for iteratively updating a target image
This is a self-assessed lab. If you need any help or want to check your answers, feel free to look at the solutions notebook in the same folder, or view it by clicking here.
GPU Workspaces
The next workspace is GPU-enabled, which means you can choose to train on a GPU instance. We recommend the following:
- Load in and test functions while in CPU (non-enabled) mode
- When you're ready to generate a style-transferred image, enable GPU to quickly iterate and generate this image!
All models, and any data they take as input, must be moved to the GPU device, so take note of the relevant movement code in the model-creation and training steps.
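The usual PyTorch pattern for that movement looks like this (the model here is a small stand-in, not the notebook's VGG):

```python
import torch
from torch import nn

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A stand-in model; in the notebook this would be the frozen VGG net.
model = nn.Conv2d(3, 8, kernel_size=3).to(device)

# Every input tensor must be moved to the same device before the
# forward pass, or PyTorch raises a device-mismatch error.
image = torch.randn(1, 3, 32, 32).to(device)
features = model(image)
```

The same `.to(device)` call is applied to the content, style, and target images, so the code runs unchanged in both CPU and GPU modes.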